distribution alignment
- Asia > Middle East > Jordan (0.04)
- Asia > China (0.04)
Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization
TPT does not explicitly align the pre-trained CLIP to become aware of the test sample distribution. For effective test-time adaptation of V-L foundation models, it is crucial to bridge the distribution gap between the pre-training dataset and the downstream evaluation set to achieve high zero-shot generalization.
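As a concrete illustration of what "bridging the distribution gap" can mean in practice, here is a minimal sketch of a statistics-matching alignment loss: the mean and variance of features from augmented views of a single test sample are pulled toward statistics precomputed on source data. The shapes, the L1 distance, and the function name are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def alignment_loss(test_feats, src_mean, src_var):
    """Distribution-alignment objective: L1 distance between the test
    sample's feature statistics and statistics precomputed on source data.

    test_feats: (n_views, d) features from augmented views of one test image.
    src_mean, src_var: (d,) source-dataset feature statistics (computed offline).
    """
    test_mean = test_feats.mean(dim=0)
    test_var = test_feats.var(dim=0, unbiased=False)
    return (test_mean - src_mean).abs().mean() + (test_var - src_var).abs().mean()

# Toy usage: 64 augmented views, 512-dim features.
feats = torch.randn(64, 512, requires_grad=True)
loss = alignment_loss(feats, torch.zeros(512), torch.ones(512))
loss.backward()  # gradients can flow back to the prompt parameters
```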
- Europe > Switzerland > Zürich > Zürich (0.14)
- Europe > Sweden > Östergötland County > Linköping (0.04)
- Europe > Netherlands > South Holland > Delft (0.04)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia (0.04)
Cooperative Distribution Alignment via JSD Upper Bound
Unsupervised distribution alignment estimates a transformation that maps two or more source distributions to a shared aligned distribution, given only samples from each distribution. This task has many applications, including generative modeling, unsupervised domain adaptation, and socially aware learning.
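The bound in the title follows from the identity (1/k) Σ_i KL(P_i ‖ q) = JSD(P_1, …, P_k) + KL(M ‖ q) for any shared distribution q, where M is the uniform mixture of the P_i, so the left-hand side upper-bounds the JSD and is tight at q = M. Below is a minimal sketch under simplifying assumptions: element-wise affine (hence invertible) per-domain maps, q fixed to a standard normal, and the change-of-variables log-det term; the paper itself learns richer invertible models and the shared distribution jointly, which tightens the bound.

```python
import torch

class AffineMap(torch.nn.Module):
    """Element-wise invertible map g(x) = exp(log_scale) * x + shift."""
    def __init__(self, dim):
        super().__init__()
        self.log_scale = torch.nn.Parameter(torch.zeros(dim))
        self.shift = torch.nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        z = x * self.log_scale.exp() + self.shift
        log_det = self.log_scale.sum()  # log|det J_g|, constant for an affine map
        return z, log_det

def jsd_upper_bound_loss(maps, batches):
    """(1/k) sum_i KL(g_i#P_i || q), up to constant entropy terms, with q a
    standard normal. Minimizing it aligns all pushforwards g_i#P_i toward q."""
    total = 0.0
    for g, x in zip(maps, batches):
        z, log_det = g(x)
        log_q = -0.5 * (z ** 2 + torch.log(torch.tensor(2 * torch.pi))).sum(dim=1)
        total = total + (-log_q - log_det).mean()
    return total / len(maps)

# Toy usage: align two 1-D Gaussians with different means and scales.
torch.manual_seed(0)
maps = [AffineMap(1), AffineMap(1)]
opt = torch.optim.Adam([p for g in maps for p in g.parameters()], lr=1e-2)
for step in range(500):
    batches = [torch.randn(256, 1) * 2.0 + 3.0,   # P1 = N(3, 2^2)
               torch.randn(256, 1) * 0.5 - 1.0]   # P2 = N(-1, 0.5^2)
    opt.zero_grad()
    loss = jsd_upper_bound_loss(maps, batches)
    loss.backward()
    opt.step()
```

Fixing q makes this a per-domain normalizing-flow fit to a common base distribution; the cooperative variant also optimizes q toward the mixture M.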
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > Wisconsin > Dane County > Madison (0.14)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Europe > Switzerland > Zürich > Zürich (0.14)
- (8 more...)
- Research Report > Experimental Study (0.93)
- Workflow (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Information Management (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.67)
- Information Technology > Artificial Intelligence > Natural Language > Information Retrieval (0.47)
06964dce9addb1c5cb5d6e3d9838f733-AuthorFeedback.pdf
We thank the reviewers for their feedback. We will reflect the reviewers' comments and our responses in the revision. The reviewers raised concerns about novelty and accuracy. We find that DA is more effective when the task is more challenging, and that it is also effective when the amount of labeled data is small.
Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization
The promising zero-shot generalization of vision-language models such as CLIP has led to their adoption via prompt learning for numerous downstream tasks. Previous works have used test-time prompt tuning with entropy minimization to adapt text prompts to unseen domains. While effective, this overlooks the key cause of performance degradation on unseen domains: distribution shift. In this work, we explicitly handle this problem by aligning the out-of-distribution (OOD) test sample statistics to those of the source data using prompt tuning. We use a single test sample to adapt multi-modal prompts at test time by minimizing the feature distribution shift, bridging the gap in the test domain. Evaluated on the domain generalization benchmark, our method improves zero-shot top-1 accuracy beyond existing prompt-learning techniques, with a 3.08% improvement over the baseline MaPLe. In cross-dataset generalization with unseen categories across 10 datasets, our method improves consistently over the existing state-of-the-art on all datasets.
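To make the recipe concrete, here is a hedged sketch of a single test-time adaptation step consistent with the abstract: minimize the entropy of view-averaged predictions plus a weighted statistics-alignment term, updating only the prompt parameters. The model interface, the L1 statistics matching, and all names are assumptions for illustration, not the released implementation.

```python
import torch

def test_time_step(model, optimizer, views, src_mean, src_var, align_weight=1.0):
    """One test-time gradient step on the prompt parameters for one sample.

    views: (n_views, 3, H, W) augmented crops of a single test image.
    model(views) is assumed to return (logits, feats) with
      logits: (n_views, n_classes) and feats: (n_views, d).
    optimizer is assumed to wrap ONLY the learnable prompt parameters.
    """
    logits, feats = model(views)

    # Entropy of the view-averaged prediction (the TPT-style objective).
    probs = logits.softmax(dim=-1).mean(dim=0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum()

    # Distribution alignment: pull test feature statistics toward source stats.
    align = (feats.mean(dim=0) - src_mean).abs().mean() \
          + (feats.var(dim=0, unbiased=False) - src_var).abs().mean()

    loss = entropy + align_weight * align
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.detach()
```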